Ensembling as a Defense Against Adversarial Examples
Abstract
Adversarial attacks on machine learning systems come in two main flavors. First, there are training-time attacks, which compromise the data the system is trained on; unsurprisingly, machines can misclassify examples if they are trained on malicious data. Second, there are test-time attacks, which craft an adversarial example that a human would easily classify as some class A but that the system erroneously classifies as a different class B [1]. For example, both images in Figure 1 are easily recognized by humans as a stop sign, yet in the adversarial setting a classifier may misclassify the image on the right as a speed limit 45 sign.
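To make the test-time setting concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way such adversarial examples are crafted. The model, input tensor, and epsilon value are illustrative placeholders, not details from this paper.

```python
# Minimal FGSM sketch: perturb an input a small step epsilon in the
# direction that increases the classifier's loss. The model and data
# below are hypothetical stand-ins.
import torch
import torch.nn as nn

def fgsm_example(model, x, label, epsilon=0.03):
    """Return x perturbed by epsilon along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Toy usage with a placeholder linear classifier (any nn.Module works):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)            # stand-in for the stop-sign image
label = torch.tensor([0])               # its true class, e.g. "stop sign"
x_adv = fgsm_example(model, x, label)   # looks like x, may be misclassified
```

Because the perturbation is bounded by epsilon per pixel, x_adv remains visually close to x for a human observer, which is exactly the property that makes the Figure 1 example dangerous.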
Similar Resources
APE-GAN: Adversarial Perturbation Elimination with GAN
Although neural networks can achieve state-of-the-art performance when recognizing images, they often suffer a tremendous defeat from adversarial examples: inputs generated by applying imperceptible but intentional perturbations to clean samples from the datasets. How to defend against adversarial examples is an important problem that is well worth researching. So far, very few methods hav...
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples
MagNet and "Efficient Defenses..." were recently proposed as defenses against adversarial examples. We find that we can construct adversarial examples that defeat these defenses with only a slight increase in distortion.
Spatially Transformed Adversarial Examples
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the Lp distance to penalize perturbations. Researchers have explored different methods to defend against such adversarial attacks. While the effectiveness of Lp distance as...
Adversarial Example Defense: Ensembles of Weak Defenses are not Strong
Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which researchers have shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of these are recently proposed defenses that intentionally combi...
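As a rough illustration of the ensemble idea this paper examines, the sketch below combines several (possibly weak) detectors and rejects an input if any member flags it. The detectors here are hypothetical heuristics invented for illustration; the paper's actual defenses differ.

```python
# Sketch of an ensemble-of-defenses detector: reject an input if any
# member detector flags it as adversarial. Detectors are hypothetical.
import torch

def ensemble_detect(detectors, x):
    """Return True if any (possibly weak) detector flags x."""
    return any(bool(d(x)) for d in detectors)

# Toy usage: two weak, hand-written heuristics over image tensors.
out_of_range = lambda x: ((x < 0) | (x > 1)).any()           # hypothetical
high_contrast = lambda x: (x - x.mean()).abs().max() > 0.95  # hypothetical
x = torch.rand(1, 3, 32, 32)
print(ensemble_detect([out_of_range, high_contrast], x))     # False here
```

The question the paper raises is whether an attacker can evade all member detectors at once, in which case the combined defense is no stronger than its parts, as the title suggests.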
Publication year: 2016